Notes on Kernel Methods in Machine Learning

Pérez-Rosero, Diego Armando, Salazar-Dubois, Danna Valentina, Lugo-Rojas, Juan Camilo, Álvarez-Meza, Andrés Marino, Castellanos-Dominguez, Germán

arXiv.org Artificial Intelligence

These notes provide a self-contained introduction to kernel methods and their geometric foundations in machine learning. Starting from the construction of Hilbert spaces, we develop the theory of positive definite kernels, reproducing kernel Hilbert spaces (RKHS), and Hilbert-Schmidt operators, emphasizing their role in statistical estimation and representation of probability measures. Classical concepts such as covariance, regression, and information measures are revisited through the lens of Hilbert space geometry. We also introduce kernel density estimation, kernel embeddings of distributions, and the Maximum Mean Discrepancy (MMD). The exposition is designed to serve as a foundation for more advanced topics, including Gaussian processes, kernel Bayesian inference, and functional analytic approaches to modern machine learning.
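
As a small, self-contained illustration of one quantity discussed in these notes, the sketch below estimates the (biased) squared Maximum Mean Discrepancy between two samples with a Gaussian RBF kernel in Python; the kernel choice, the bandwidth parameter gamma, and the helper names rbf_kernel and mmd_squared_biased are illustrative assumptions, not constructions taken from the notes themselves.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

def mmd_squared_biased(X, Y, gamma=1.0):
    """Biased (V-statistic) estimator of MMD^2 between samples X ~ P and Y ~ Q."""
    K_xx = rbf_kernel(X, X, gamma)
    K_yy = rbf_kernel(Y, Y, gamma)
    K_xy = rbf_kernel(X, Y, gamma)
    return K_xx.mean() + K_yy.mean() - 2.0 * K_xy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))   # samples from P
Y = rng.normal(0.5, 1.0, size=(500, 2))   # samples from Q (shifted mean)
print(mmd_squared_biased(X, Y))           # noticeably > 0 for different distributions
print(mmd_squared_biased(X, X[::-1]))     # ~ 0 when both samples coincide
```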


Computations and ML for surjective rational maps

Karzhemanov, Ilya

arXiv.org Artificial Intelligence

The present note studies \emph{surjective rational endomorphisms} $f: \mathbb{P}^2 \dashrightarrow \mathbb{P}^2$ with \emph{cubic} terms and the indeterminacy locus $I_f \ne \emptyset$. We develop an experimental approach, based on some Python programming and Machine Learning, towards the classification of such maps; a couple of new explicit maps $f$ are constructed in this way. We also prove (via pure projective geometry) that a general non-regular cubic endomorphism $f$ of $\mathbb{P}^2$ is surjective if and only if the set $I_f$ has cardinality at least $3$.
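
As a rough sketch of the kind of Python experimentation the abstract mentions, the snippet below computes the indeterminacy locus $I_f$ (the common zeros of the three defining cubics) chart by chart with sympy; the forms F0, F1, F2 and the helper indeterminacy_points are hypothetical choices made for illustration, not the maps or the pipeline from the note.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def normalize(pt):
    # Scale so the first nonzero coordinate equals 1; (0, 0, 0) is not a point of P^2.
    for c in pt:
        if c != 0:
            return tuple(sp.simplify(v / c) for v in pt)
    return None

def indeterminacy_points(F0, F1, F2):
    # Collect the common zeros of the defining forms chart by chart
    # (x = 1, y = 1, z = 1); every point of P^2 lies in at least one chart.
    # Assumes the common zero locus is finite (zero-dimensional).
    coords = (x, y, z)
    points = set()
    for c in coords:
        system = [F.subs(c, 1) for F in (F0, F1, F2)]
        unknowns = [v for v in coords if v != c]
        for sol in sp.solve(system, unknowns, dict=True):
            pt = tuple(sp.Integer(1) if v == c else sol.get(v, v) for v in coords)
            norm = normalize(pt)
            if norm is not None:
                points.add(norm)
    return points

# Hypothetical cubic forms with exactly three base points (the coordinate vertices);
# this is NOT one of the maps constructed in the note, just a sanity check of the tool.
F0, F1, F2 = x**2 * y, y**2 * z, z**2 * x
pts = indeterminacy_points(F0, F1, F2)
print(len(pts), sorted(map(str, pts)))   # 3 points: [1:0:0], [0:1:0], [0:0:1]
```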


We will extend the submission with discussions from below.

Neural Information Processing Systems

We thank the reviewers for their insightful comments. In this rebuttal, we respond to the main remarks raised in the reviews. Remark 1: The work lacks a discussion comparing interpretability with BSP-Net. Moreover, BSP-Net's CSG structure is fixed by definition, whereas our method yields distinct CSG trees for different instances (see the figure on the right). Remark 2: Only a single instance of CSG visualization is shown for each class.




Finite-Dimensional Gaussian Approximation for Deep Neural Networks: Universality in Random Weights

Balasubramanian, Krishnakumar, Ross, Nathan

arXiv.org Machine Learning

Typically, each layer also includes bias parameters; however, setting them to zero does not affect our results, so they are omitted (see Remark 1.5). Our main result establishes Gaussian approximation bounds, in the Wasserstein-1 distance, between the finite-dimensional distributions (FDDs) of wide neural networks and their Gaussian process limits, under general weight distributions satisfying mild moment conditions and assuming a Lipschitz activation function. Neal (1996) showed that the output distribution of a single hidden-layer neural network converges to a Gaussian in the infinite-width limit.
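
A minimal numerical sketch of the phenomenon behind this result (the assumptions here are mine: tanh activation, Rademacher weights, zero biases, and excess kurtosis as a crude Gaussianity proxy) simulates the output of a one-hidden-layer network at a fixed input across widths and watches the empirical distribution approach its Gaussian limit, in the spirit of Neal (1996).

```python
import numpy as np

def network_output(x, width, rng):
    # f(x) = (1/sqrt(n)) * v . tanh(W x), with i.i.d. Rademacher weights
    # (a non-Gaussian distribution with mean 0 and variance 1) and zero biases.
    W = rng.choice([-1.0, 1.0], size=(width, x.shape[0]))
    v = rng.choice([-1.0, 1.0], size=width)
    return v @ np.tanh(W @ x) / np.sqrt(width)

rng = np.random.default_rng(0)
x = np.array([0.7, -0.3])   # a fixed input point

for width in (4, 32, 256, 2048):
    samples = np.array([network_output(x, width, rng) for _ in range(5000)])
    # Excess kurtosis tending to 0 is a crude proxy for approach to the Gaussian limit.
    kurt = np.mean((samples - samples.mean())**4) / samples.var()**2 - 3.0
    print(f"width={width:5d}  var={samples.var():.3f}  excess kurtosis={kurt:+.3f}")
```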